
    Adversarial Robustness: Softmax versus Openmax

    Deep neural networks (DNNs) provide state-of-the-art results on various tasks and are widely used in real-world applications. However, it was discovered that machine learning models, including the best performing DNNs, suffer from a fundamental problem: they can unexpectedly and confidently misclassify examples formed by slightly perturbing otherwise correctly recognized inputs. Various approaches have been developed for efficiently generating these so-called adversarial examples, but they mostly rely on ascending the gradient of the loss. In this paper, we introduce the novel logits optimized targeting system (LOTS) to directly manipulate deep features captured at the penultimate layer. Using LOTS, we analyze and compare the adversarial robustness of DNNs using the traditional Softmax layer with Openmax, which was designed to provide open set recognition by defining classes derived from deep representations, and is claimed to be more robust to adversarial perturbations. We demonstrate that Openmax is less vulnerable than Softmax to traditional attacks; however, we show that it can be equally susceptible to more sophisticated adversarial generation techniques that work directly on deep representations.
    Comment: Accepted to British Machine Vision Conference (BMVC) 201
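The core idea of a LOTS-style attack, as the abstract describes it, is to perturb an input so that its penultimate-layer features move toward those of a chosen target. A minimal numpy sketch on a toy linear feature extractor (the matrix `W`, step size, and iteration count are illustrative assumptions, not the paper's networks or settings):

```python
import numpy as np

# Toy "penultimate layer": a fixed linear feature extractor f(x) = W @ x.
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 8))

def lots_step(x, target_feat, step=0.01):
    """One LOTS-style step: nudge x so its deep feature approaches target_feat.

    Gradient of 0.5 * ||W x - t||^2 w.r.t. x is W.T @ (W x - t);
    we descend along it in input space.
    """
    grad = W.T @ (W @ x - target_feat)
    return x - step * grad

x = rng.standard_normal(8)       # input to perturb
t = W @ rng.standard_normal(8)   # feature vector of a chosen target class
d0 = np.linalg.norm(W @ x - t)
for _ in range(100):
    x = lots_step(x, t)
d1 = np.linalg.norm(W @ x - t)
print(d1 < d0)  # the feature-space distance to the target shrinks
```

In a real DNN the same loop would backpropagate the feature-space loss through the network instead of using the closed-form gradient above.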

    Are Accuracy and Robustness Correlated?

    Machine learning models are vulnerable to adversarial examples formed by applying small, carefully chosen perturbations to inputs that cause unexpected classification errors. In this paper, we perform experiments on various adversarial example generation approaches with multiple deep convolutional neural networks, including Residual Networks, the best performing models on the ImageNet Large-Scale Visual Recognition Challenge 2015. We compare the adversarial example generation techniques with respect to the quality of the produced images, and measure the robustness of the tested machine learning models to adversarial examples. Finally, we conduct large-scale experiments on cross-model adversarial portability. We find that adversarial examples are mostly transferable across similar network topologies, and we demonstrate that better machine learning models are less vulnerable to adversarial examples.
    Comment: Accepted for publication at ICMLA 201
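One widely used generation approach of the kind compared in this abstract is the fast gradient sign method (FGSM). A minimal sketch on a toy logistic classifier (the weights, input, and epsilon are illustrative assumptions, not the networks evaluated in the paper):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """Fast gradient sign method for a logistic classifier p = sigmoid(w.x + b).

    The gradient of the cross-entropy loss w.r.t. x is (p - y) * w; the
    attack takes a single eps-sized step in the sign of that gradient.
    """
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

w = np.array([1.0, -2.0]); b = 0.0
x = np.array([2.0, 0.5])                 # classified positive: w.x = 1.0 > 0
x_adv = fgsm(x, y=1.0, w=w, b=b, eps=0.6)
print(w @ x, w @ x_adv)                  # the decision margin drops after the attack
```

Transferability experiments like those in the paper then feed `x_adv` crafted against one model into a different model and measure how often the error carries over.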

    Toward Open-Set Face Recognition

    Much research has been conducted on both face identification and face verification, with greater focus on the latter. Research on face identification has mostly focused on closed-set protocols, which assume that all probe images used in evaluation contain identities of subjects that are enrolled in the gallery. Real systems, however, where only a fraction of probe sample identities are enrolled in the gallery, cannot make this closed-set assumption. Instead, they must assume an open set of probe samples and be able to reject/ignore those that correspond to unknown identities. In this paper, we address the widespread misconception that thresholding verification-like scores is a good way to solve the open-set face identification problem, by formulating an open-set face identification protocol and evaluating different strategies for assessing similarity. Our open-set identification protocol is based on the canonical labeled faces in the wild (LFW) dataset. In addition to the known identities, we introduce the concepts of known unknowns (known, but uninteresting persons) and unknown unknowns (people never seen before) to the biometric community. We compare three algorithms for assessing similarity in a deep feature space under an open-set protocol: thresholded verification-like scores, linear discriminant analysis (LDA) scores, and extreme value machine (EVM) probabilities. Our findings suggest that thresholding EVM probabilities, which are open-set by design, outperforms thresholding verification-like scores.
    Comment: Accepted for Publication in CVPR 2017 Biometrics Workshop
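The verification-like baseline discussed in this abstract reduces to taking the best gallery match and thresholding its similarity score. A minimal sketch in a toy two-identity feature space (the feature vectors, names, and threshold are illustrative assumptions, not the LFW-based protocol):

```python
import numpy as np

def identify_open_set(probe, gallery, threshold):
    """Open-set identification by thresholding a verification-like score.

    gallery: dict name -> enrolled feature vector. Returns the best-matching
    identity, or None (rejection as unknown) when no cosine similarity
    clears the threshold. This is the baseline the paper argues against.
    """
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    best_name, best_score = None, -1.0
    for name, feat in gallery.items():
        s = cos(probe, feat)
        if s > best_score:
            best_name, best_score = name, s
    return best_name if best_score >= threshold else None

gallery = {"alice": np.array([1.0, 0.0]), "bob": np.array([0.0, 1.0])}
print(identify_open_set(np.array([0.9, 0.1]), gallery, threshold=0.8))  # "alice"
print(identify_open_set(np.array([0.7, 0.7]), gallery, threshold=0.8))  # None: unknown
```

The EVM alternative the paper favors replaces the raw cosine score with a probability of inclusion calibrated on the distribution of each identity's nearest negatives, which behaves better near the open-set boundary.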

    Large-Scale Open-Set Classification Protocols for ImageNet

    Open-Set Classification (OSC) intends to adapt closed-set classification models to real-world scenarios, where the classifier must correctly label samples of known classes while rejecting previously unseen unknown samples. Only recently has research started to investigate algorithms that are able to handle these unknown samples correctly. Some of these approaches address OSC by including in the training set negative samples that a classifier learns to reject, expecting that these data increase the robustness of the classifier on unknown classes. Most of these approaches are evaluated on small-scale and low-resolution image datasets like MNIST, SVHN or CIFAR, which makes it difficult to assess their applicability to the real world, and to compare them among each other. We propose three open-set protocols that provide rich datasets of natural images with different levels of similarity between known and unknown classes. The protocols consist of subsets of ImageNet classes selected to provide training and testing data closer to real-world scenarios. Additionally, we propose a new validation metric that can be employed to assess whether the training of deep learning models addresses both the classification of known samples and the rejection of unknown samples. We use the protocols to compare the performance of two baseline open-set algorithms to the standard SoftMax baseline and find that the algorithms work well on negative samples that have been seen during training, and partially on out-of-distribution detection tasks, but their performance drops in the presence of samples from previously unseen unknown classes.
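The standard SoftMax baseline mentioned in this abstract can only reject unknowns by thresholding its confidence. A minimal sketch of that rejection-by-threshold behavior (the logits and threshold are illustrative assumptions, not the paper's protocols or its proposed validation metric):

```python
import numpy as np

def softmax(logits):
    # Shift by the max for numerical stability before exponentiating.
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def classify_with_rejection(logits, threshold):
    """SoftMax baseline for open-set classification: predict the argmax
    class only if its probability clears a confidence threshold,
    otherwise reject the sample as unknown (returned as -1)."""
    probs = softmax(logits)
    k = int(np.argmax(probs))
    return k if probs[k] >= threshold else -1

print(classify_with_rejection(np.array([4.0, 0.5, 0.2]), 0.8))  # confident -> class 0
print(classify_with_rejection(np.array([1.0, 0.9, 0.8]), 0.8))  # ambiguous -> -1
```

The weakness the paper probes is that a network can remain highly confident on unknown classes it never saw, so this threshold alone does not guarantee rejection.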

    Cross- and in-plane thermal conductivity of AlN thin films measured using differential 3-omega method

    Thickness dependence and interfacial structure effects on the thermal properties of AlN thin films were systematically investigated by characterizing cross-plane and in-plane thermal conductivities, crystal structures, chemical compositions, surface morphologies and interfacial structures using an extended differential 3ω method, X-ray diffraction (XRD) analysis, X-ray photoelectron spectroscopy, atomic force microscopy (AFM) and transmission electron microscopy. AlN thin films with various thicknesses from 100 to 1000 nm were deposited on p-type doped silicon substrates using a radio frequency reactive magnetron sputtering process. Results revealed that both the cross- and in-plane thermal conductivities of the AlN thin films were significantly smaller than those of AlN in bulk form. The thermal conductivities of the AlN thin films were strongly dependent on the film thickness, in both the cross- and in-plane directions. Both the XRD and AFM results indicated that the grain size significantly affected the thermal conductivity of the films due to scattering effects from the grain boundaries.
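In the differential 3ω method referenced above, the film is commonly modeled as a one-dimensional thermal resistance under the heater line, so its cross-plane conductivity follows from the extra temperature rise of the film sample over a bare reference. A hedged sketch of that textbook relation (the function name and all numbers below are illustrative assumptions, not measurements or analysis from this work):

```python
def film_kappa_differential(P, width, length, thickness, dT_film):
    """Cross-plane film conductivity from a differential 3-omega measurement.

    Treating the film as a 1-D thermal resistance under the heater of area
    width * length gives kappa_f = P * t / (width * length * dT_film),
    where dT_film is the extra temperature-oscillation amplitude of the
    film-coated sample relative to the bare reference substrate.
    """
    return P * thickness / (width * length * dT_film)

# Illustrative numbers (not from the paper): 30 mW over a 10 um x 1 mm
# heater line on a 500 nm film, with a 50 mK differential temperature rise.
k = film_kappa_differential(P=0.03, width=10e-6, length=1e-3,
                            thickness=500e-9, dT_film=0.05)
print(k)  # W/(m*K)
```

This one-dimensional form neglects heat spreading in the film, which is part of what the "extended" analysis in the paper accounts for when extracting in-plane values.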

    Mask-based dual-axes tomoholography using soft x-rays

    We explore tomographic mask-based Fourier transform x-ray holography with respect to the use of a thin slit as a reference wave source. This imaging technique exclusively uses the interference between the waves scattered by the object and the slit, simplifying the experimental realization and ensuring high data quality. Furthermore, we introduce a second reference slit to rotate the sample around a second axis and to record a dual-axes tomogram. Compared to a single-axis tomogram, the reconstruction artifacts are decreased in accordance with the reduced missing-data wedge. Two demonstration experiments are performed where test structures are imaged with a lateral resolution below 100 nm.
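The principle behind Fourier transform holography is that a single inverse Fourier transform of the recorded far-field intensity yields the autocorrelation of the exit wave, whose off-center terms contain the object cross-correlated with the reference. A minimal numpy sketch with an illustrative geometry (the array size and the object/slit shapes are assumptions, not the paper's setup):

```python
import numpy as np

# Sample plane: an "object" plus a thin reference slit, well separated so
# the cross-correlation terms do not overlap the central autocorrelation.
n = 64
plane = np.zeros((n, n))
plane[30:34, 30:34] = 1.0   # object
plane[30:34, 4] = 1.0       # thin reference slit

# The detector records only the intensity of the far field (the hologram);
# a single inverse FFT of it returns the autocorrelation of the exit wave.
far_field = np.fft.fft2(plane)
hologram = np.abs(far_field) ** 2
recon = np.fft.ifft2(hologram)

# The object image appears in the off-center cross-correlation terms,
# away from the dominant autocorrelation peak at the origin.
print(recon.shape)
```

The dual-axes extension in the paper adds a second slit so the sample can be rotated about a second tomographic axis, shrinking the missing-data wedge in the combined reconstruction.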